Optimization Methods in Banach Spaces
Abstract
In this chapter we present a selection of important algorithms for optimization problems with partial differential equations. The development and analysis of these methods is carried out in a Banach space setting. We begin by introducing a general framework for achieving global convergence. Then, several variants of generalized Newton methods are derived and analyzed. In particular, necessary and sufficient conditions for fast local convergence are derived. Based on this, the concept of semismooth Newton methods for operator equations is introduced. It is shown how complementarity conditions, variational inequalities, and optimality systems can be reformulated as semismooth operator equations. Applications to constrained optimal control problems are discussed, in particular for elliptic partial differential equations and for flow control problems governed by the incompressible instationary Navier-Stokes equations. As a further important concept, the formulation of optimality systems as generalized equations is addressed. We introduce and analyze the Josephy-Newton method for generalized equations. This provides an elegant basis for the motivation and analysis of sequential quadratic programming (SQP) algorithms. The chapter concludes with a short outline of recent algorithmic advances for state constrained problems and a brief discussion of several further aspects.
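To make the reformulation idea concrete, the following is a minimal finite-dimensional sketch of a semismooth Newton method, assuming the standard min-function rewriting of a complementarity condition: x >= 0, F(x) >= 0, x_i F_i(x) = 0 becomes the nonsmooth equation Phi(x) = min(x, F(x)) = 0, which is solved with Newton steps built from an element of the generalized Jacobian. The affine map F below is a hypothetical example, not taken from the chapter.

```python
"""Minimal sketch of a semismooth Newton method for the complementarity
problem  x >= 0,  F(x) >= 0,  x_i * F_i(x) = 0,  reformulated as the
nonsmooth equation  Phi(x) = min(x, F(x)) = 0.  The affine F is a
hypothetical example; in PDE-constrained optimization, F would come
from a discretized optimality system."""
import numpy as np

A = np.array([[4.0, -1.0],
              [-1.0, 4.0]])
b = np.array([-1.0, 2.0])

def F(x):
    return A @ x + b          # hypothetical smooth map

def semismooth_newton(x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        Fx = F(x)
        phi = np.minimum(x, Fx)                 # Phi(x) = min(x, F(x))
        if np.linalg.norm(phi) < tol:
            break
        # Element of the generalized Jacobian of Phi: row i is the i-th
        # identity row where x_i < F_i(x), and the i-th row of F'(x) = A
        # otherwise (at ties either row is admissible).
        G = np.where((x < Fx)[:, None], np.eye(len(x)), A)
        x = x - np.linalg.solve(G, phi)         # full Newton step
    return x

x_star = semismooth_newton(np.zeros(2))
print(x_star)                                   # approx. [0.25, 0.]
```

Locally superlinear convergence of such iterations is exactly what the semismoothness conditions analyzed in the chapter guarantee; this finite-dimensional example only illustrates the mechanics of the generalized-Jacobian step.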
Similar works
Mangasarian-Fromovitz and Zangwill Conditions for Non-Smooth Infinite Optimization Problems in Banach Spaces
In this paper we study optimization problems with infinitely many inequality constraints on a Banach space, where the objective function and the binding constraints are Lipschitz near the optimal solution. Necessary optimality conditions and constraint qualifications in terms of the Michel-Penot subdifferential are given.
A Hybrid Proximal Point Algorithm for Resolvent Operators in Banach Spaces
Equilibrium problems have many applications in optimization theory and convex analysis, which is why different methods have been developed for solving equilibrium problems in various spaces, such as Hilbert spaces and Banach spaces. The purpose of this paper is to provide a method for obtaining a solution of the equilibrium problem in Banach spaces. In fact, we consider a hybrid proximal point algorithm...
A Proximal Point Method for Nonsmooth Convex Optimization Problems in Banach Spaces
In this paper we show the weak convergence and stability of the proximal point method when applied to constrained convex optimization problems in uniformly convex and uniformly smooth Banach spaces. In addition, we establish a nonasymptotic estimate of the convergence rate of the sequence of functional values for the unconstrained case. This estimate depends on a geometric characteristic of the ...
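As a rough illustration of this family of methods, here is a minimal Euclidean sketch of the exact proximal point iteration x_{k+1} = argmin_x f(x) + (1/(2*lam)) ||x - x_k||^2; the Banach space analysis in such papers replaces the squared norm by suitable Bregman-type distances. The choice f = ||.||_1, whose proximal map is soft-thresholding, is a hypothetical example.

```python
"""Minimal Euclidean sketch of the proximal point method
x_{k+1} = argmin_x f(x) + (1/(2*lam)) * ||x - x_k||^2, shown for the
hypothetical choice f = ||.||_1, whose prox is soft-thresholding."""
import numpy as np

def prox_l1(v, lam):
    # Closed-form proximal map of lam * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def proximal_point(x0, lam=0.5, iters=25):
    x = x0
    for _ in range(iters):
        x = prox_l1(x, lam)   # exact proximal step
    return x

print(proximal_point(np.array([3.0, -0.2, 1.0])))  # tends to the minimizer 0
```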
A Proximal Point Method in Nonreflexive Banach Spaces
We propose an inexact version of the proximal point method and study its properties in nonreflexive Banach spaces that are duals of separable Banach spaces, both for the problem of minimizing convex functions and for that of finding zeroes of maximal monotone operators. By using surjectivity results for enlargements of maximal monotone operators, we prove existence of the iterates in both cases. Then w...
Minimization of Nonsmooth Convex Functionals in Banach Spaces
We develop a unified framework for convergence analysis of subgradient and subgradient projection methods for minimization of nonsmooth convex functionals in Banach spaces. The important novel features of our analysis are that we neither assume that the functional is uniformly or strongly convex, nor use regularization techniques. Moreover, no boundedness assumptions are made on the level sets o...
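For orientation, the following is a minimal Euclidean sketch of the projected subgradient iteration x_{k+1} = P_C(x_k - t_k g_k) with g_k in the subdifferential of f at x_k and diminishing steps t_k; the Banach space analysis uses duality mappings where this sketch uses the identity. The objective f(x) = ||x - a||_1, the unit-ball constraint, and the step rule are hypothetical choices for illustration.

```python
"""Minimal Euclidean sketch of the projected subgradient method
x_{k+1} = P_C(x_k - t_k * g_k),  g_k in the subdifferential of f at x_k,
for the hypothetical problem  min ||x - a||_1  over the unit ball."""
import numpy as np

a = np.array([2.0, -1.0])

def subgradient(x):
    # np.sign yields a valid subgradient of ||x - a||_1 (0 at kinks).
    return np.sign(x - a)

def project_unit_ball(x):
    # Euclidean projection onto the closed unit ball.
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.zeros(2)
for k in range(1, 2001):
    x = project_unit_ball(x - (1.0 / k) * subgradient(x))  # t_k = 1/k
print(x)   # slowly approaches (1, -1)/sqrt(2)
```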
Regularized Learning in Banach Spaces as an Optimization Problem: Representer Theorems
We view regularized learning of a function in a Banach space from its finite samples as an optimization problem. Within the framework of reproducing kernel Banach spaces, we prove the representer theorem for the minimizer of regularized learning schemes with a general loss function and a nondecreasing regularizer. When the loss function and the regularizer are differentiable, a characterization...